The Peacemaking Machine: Can AI Improve Democratic Deliberation?

"Democracy relies on the exchange of ideas, conversations, and deliberation. This has been true since Athenian democracy and remains crucial today—perhaps more so than ever. However, meaningful deliberation is becoming increasingly difficult." — Professor Michiel Bakker, MIT/DeepMind 

Giorgia Christiansen

On February 27, the Burnes Center for Social Change hosted The Peacemaking Machine, featuring Michael Henry (MH) Tessler and Michiel A. Bakker from MIT and Google DeepMind on Northeastern University’s Boston campus. The discussion explored how artificial intelligence can play a role in fostering consensus in democratic deliberation and whether AI can mediate political discourse in ways that human facilitators cannot.

  • Watch the full video

  • Read the full transcript 

  • Article about the Habermas Machine by Michael Henry Tessler, Michiel A. Bakker, Daniel Jarrett, Hannah Sheahan, Martin J. Chadwick, Raphael Koster, Georgina Evans, Lucy Campbell-Gillingham, Tantum Collins, David C. Parkes, Matthew Botvinick, Christopher Summerfield.

Their talk focused on recent research they conducted in the UK with 5,700 participants, where they investigated whether an AI system can facilitate collective deliberation by mediating discussions on social and political issues. The study focused on four main questions:

  • Can AI-mediated deliberation help groups find common ground?

  • Does AI-mediated deliberation reduce divisions among participants?

  • Does the AI system fairly represent both majority and minority viewpoints?

  • Can AI support deliberation in a setting resembling a citizens' assembly?

Dubbed The Habermas Machine, this AI-driven system is designed to facilitate deliberation and help small groups find common ground on contentious political topics. Named after political theorist Jürgen Habermas, the system is built on DeepMind's Chinchilla language model. By structuring conversations and synthesizing viewpoints, the AI offers an alternative to traditional debate formats that often reinforce polarization rather than resolve it.

Unlike traditional group deliberation, which can be swayed by dominant voices, the Habermas Machine is designed to produce a written consensus statement that incorporates all viewpoints, including minority perspectives. In their experiments, participants in groups of 3-5 people submitted their opinions on political questions, after which the AI generated multiple candidate consensus statements. These statements were then ranked according to predicted participant preferences, with the top-ranked statement selected and refined through participant feedback over multiple rounds. This structured approach provides a level of impartiality that human facilitators may struggle to maintain in highly charged discussions.
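The generate-rank-refine loop described above can be sketched in code. The sketch below is purely illustrative and is not the Habermas Machine's actual implementation: the language model is stubbed out as a `generate` callback, the learned preference model is replaced by a crude word-overlap proxy, and the choice to rank candidates by the *minimum* predicted preference across participants (so minority views are not ignored) is an assumption made for the example, not a detail confirmed by the talk. All function names here are hypothetical.

```python
import re

def tokens(text):
    """Lowercased word set, used by the toy preference proxy below."""
    return set(re.findall(r"[a-z']+", text.lower()))

def predicted_preference(statement, opinion):
    """Toy stand-in for the trained preference model: fraction of a
    participant's opinion words that appear in the candidate statement."""
    s, o = tokens(statement), tokens(opinion)
    return len(s & o) / max(len(o), 1)

def rank_candidates(candidates, opinions):
    """Rank candidates by the MINIMUM predicted preference across all
    participants -- an egalitarian rule chosen for illustration, so a
    statement that ignores any one participant scores poorly."""
    def worst_case(c):
        return min(predicted_preference(c, op) for op in opinions)
    return sorted(candidates, key=worst_case, reverse=True)

def deliberate(opinions, generate, rounds=2, n_candidates=4):
    """Generate-rank-refine loop. `generate(opinions, feedback, n)` is a
    placeholder for the language model producing n candidate statements."""
    feedback = []
    best = None
    for _ in range(rounds):
        candidates = generate(opinions, feedback, n_candidates)
        best = rank_candidates(candidates, opinions)[0]
        # In the real study, participants critique the winning statement;
        # here the winner itself is carried forward as feedback.
        feedback.append(best)
    return best
```

With two opposed opinions on the voting age, the min-aggregation rule favors a statement that acknowledges both sides over one that echoes only the first participant, which is the intuition behind protecting minority perspectives in the ranking step.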

A major focus of the talk was how AI can enhance democratic engagement by making deliberation more efficient and accessible. The speakers outlined how traditional deliberation is often costly, slow, and difficult to scale, whereas AI-powered mediation could provide a more structured and scalable alternative. They discussed how the system ranks and refines consensus statements by predicting participant preferences, allowing for a level of deliberative efficiency that is difficult to achieve in purely human-led discussions.

The Habermas Machine represents a potential shift in how democratic deliberation is conducted:

  • It structures discussions to reduce polarization and synthesize viewpoints into a common statement.

  • AI mediation ensures all voices are considered, preventing dominant speakers from overshadowing minority perspectives.

  • The model refines statements iteratively, improving clarity and consensus over multiple rounds of deliberation.

Beyond its technical capabilities, the discussion also highlighted concerns around AI's role in political discourse.

  • Ethical risks include algorithmic bias, transparency issues, and the possibility of reinforcing existing power imbalances.

  • While AI can depolarize discussions by framing areas of agreement, it must not force compromise on deeply held beliefs.

  • There is ongoing debate about how AI-driven systems could be integrated into formal policymaking and governance.

Bakker and Tessler acknowledged that algorithmic bias in AI mediation arises when the system unintentionally favors certain viewpoints over others, often reflecting the biases present in its training data or in the deliberation process itself. In the case of the Habermas Machine, there is a risk that AI could prioritize majority opinions, subtly marginalizing minority perspectives rather than giving them equal weight. Additionally, if the AI’s ranking and refinement processes are not fully transparent, it may be difficult to detect whether certain voices are being systematically overlooked. This could reinforce existing power imbalances rather than creating truly equitable discussions, making it essential to design AI systems that actively mitigate bias rather than perpetuate it.

While the system has shown promising results, questions remain about how well it would function in larger groups or across diverse cultural contexts. The conversation touched on the need for further research into how AI could be integrated into formal democratic processes, such as citizens' assemblies or government policymaking forums.

The discussion continued with thought-provoking questions from the audience, including:

  • Costas Panagopoulos (Political Science, Northeastern University) asked how AI deliberation compares to traditional deliberative approaches in terms of fostering attitude change, particularly in deeply polarized topics like Brexit. Michael Henry Tessler noted that while the AI-mediated deliberation showed small but significant shifts in opinion, deeply entrenched issues remained largely unaffected.

  • Jose L. Marti (Pompeu Fabra University, Barcelona) raised concerns about whether the AI system truly facilitates deliberation in a Habermasian sense. He questioned whether the AI helps participants reach agreements through reasoned argumentation, or if it simply aggregates and refines existing common ground without fostering deeper deliberative engagement. He also inquired about the system’s ability to scale to larger groups. Tessler responded by acknowledging that the approach is "minimally deliberative" and that some aspects of face-to-face deliberation—such as emotional cues and interpersonal bonding—are lost in digital mediation. However, he emphasized the potential for hybrid models that integrate AI while preserving key elements of traditional discussion.

Moderated by Beth Simone Noveck, this conversation was part of The Rebooting Democracy in the Age of AI lecture series, hosted by the Burnes Center for Social Change, The GovLab, the Internet Democracy Initiative, and the Institute for Experiential AI at Northeastern University. The event underscored the critical role of AI in reshaping political discourse while also highlighting the need for responsible AI governance and oversight. 

The series continues on March 20, 5 p.m. ET, with Ed Bice from Meedan, who will discuss using AI to improve equity, accessibility, and credibility in election-related information.

Sign up for all upcoming events: rebootdemocracy.ai/events

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.